24 research outputs found

    Cloud Servers: Resource Optimization Using Different Energy Saving Techniques

    Currently, researchers are working to contribute to the emerging fields of cloud computing, edge computing, and distributed systems, and a major area of interest is examining and understanding their performance. Globally leading companies such as Google, Amazon, ONLIVE, Giaki, and eBay are genuinely concerned about the impact of energy consumption. These cloud computing companies operate huge data centers, consisting of virtual machines positioned worldwide, that incur exceptionally high power costs to maintain. The increased energy consumption of IT firms has posed many challenges for cloud computing companies with respect to power expenses. Energy utilization depends on numerous factors, such as the service level agreement, the technique for selecting virtual machines, the applied optimization strategies and policies, and the kind of workload. The present paper addresses energy-saving challenges in gaming data centers using dynamic voltage and frequency scaling (DVFS) techniques, and evaluates DVFS against non-power-aware and static threshold detection techniques. The findings will help service providers meet quality of service and quality of experience constraints while fulfilling service level agreements. For this purpose, the CloudSim platform is used to simulate a scenario in which game traces are employed as the workload for analyzing the procedure. The findings show that well-chosen techniques can help gaming servers conserve energy and sustain the best quality of service for consumers located worldwide. The originality of this research lies in examining which approach performs best (dynamic, static, or non-power-aware). The findings confirm that applying a dynamic voltage and frequency scaling method uses less energy, causes fewer service level agreement violations, and yields better quality of service and experience than static threshold consolidation or the non-power-aware technique.
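
    As a rough illustration of why DVFS can save energy relative to a non-power-aware host, the Python sketch below compares the energy drawn over a day-long utilization trace under two simple power models; the power figures, the cubic frequency-scaling assumption, and the utilization trace are illustrative assumptions, not values or models taken from the paper's CloudSim experiments.

```python
# Illustrative comparison of non-power-aware vs. DVFS energy use.
# All numbers (power figures, utilization trace) are assumptions for
# illustration, not results from the paper's CloudSim setup.

P_IDLE = 100.0   # watts drawn by an idle host (assumed)
P_MAX = 250.0    # watts drawn at full frequency and full load (assumed)

def power_non_power_aware(utilization: float) -> float:
    """Host always runs at full frequency; power is load-independent."""
    return P_MAX

def power_dvfs(utilization: float) -> float:
    """Frequency follows utilization; dynamic power ~ f^3 (cubic model)."""
    dynamic = (P_MAX - P_IDLE) * utilization ** 3
    return P_IDLE + dynamic

def energy_wh(trace, power_model, step_hours=1.0):
    """Integrate power over an hourly utilization trace."""
    return sum(power_model(u) * step_hours for u in trace)

if __name__ == "__main__":
    # A made-up 24-hour utilization trace for a gaming workload.
    trace = [0.2] * 6 + [0.5] * 6 + [0.9] * 8 + [0.4] * 4
    e_npa = energy_wh(trace, power_non_power_aware)
    e_dvfs = energy_wh(trace, power_dvfs)
    print(f"non-power-aware: {e_npa:.0f} Wh, DVFS: {e_dvfs:.0f} Wh")
```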

    Sickle Cell Disease (SCD)

    Sickle cell anemia (SCA) is a disease caused by the formation of an abnormal hemoglobin type, which can bind with other abnormal hemoglobin molecules within the red blood cells (RBCs) to cause rigid distortion of the cell. This distortion prevents the cell from passing through small blood vessels, leading to occlusion of vascular beds, followed by tissue ischemia and infarction. Infarction occurs frequently throughout the body in patients with SCA, leading to acute pain crises. Over time, such insults result in medullary bone infarcts and epiphyseal osteonecrosis. In the brain, cognitive impairment and functional neurologic deficits may occur due to white matter and gray matter infarcts. Infarction may also affect the lungs, increasing susceptibility to pneumonia. The liver, spleen, and kidney may show infarction as well. Sequestration crisis is an unusual, life-threatening complication of SCA in which a significant amount of blood is sequestered in an organ (usually the spleen), leading to collapse. Lastly, because the RBCs are abnormal, they are destroyed, resulting in hemolytic anemia. However, the ischemic complications in patients with SCA far exceed the anemia in clinical significance.

    An Overview of Fog Computing and Edge Computing Security and Privacy Issues

    With the advancement of technologies such as 5G networks and the IoT, the use of cloud computing technologies has become essential. Cloud computing enables intensive data processing and warehousing solutions. Fog computing and edge computing are two newer cloud technologies that inherit parts of the traditional cloud computing paradigm; they aim to simplify some of the complexity of cloud computing and leverage computing capabilities within the local network to perform computation tasks rather than carrying them to the cloud. This makes these technologies a good fit for the properties of IoT systems. However, using such technologies introduces several new security and privacy challenges that could be a huge obstacle to their implementation. In this paper, we survey some of the main security and privacy challenges that face fog and edge computing, illustrating how these security issues could affect the operation and implementation of edge and fog computing. Moreover, we present several countermeasures to mitigate the effect of these security issues.

    Semi-Blind Channel Estimation for Intelligent Reflecting Surfaces in Massive MIMO Systems

    Intelligent reflecting surface (IRS) is considered a promising technology for enhancing the transmission rate in cellular networks. Such improvement is attributed to deploying a large IRS with a high number of passive reflecting elements, optimized to properly focus the incident beams towards the receiver. However, to achieve this beamforming gain, the channel state information (CSI) must be efficiently acquired at the base station (BS). Unfortunately, the traditional pilot-based estimation method is challenging, because the passive IRS has no radio frequency (RF) chains and the number of channel coefficients is proportional to the number of IRS elements. In this paper, we propose a novel semi-blind channel estimation method in which the reflected channels are estimated using not only pilot symbols but also data symbols, reducing the channel estimation overhead. The performance of the system is analytically investigated in terms of the uplink achievable sum-rate. The proposed scheme achieves higher energy and spectrum efficiency while being robust to channel estimation errors. For instance, the proposed scheme achieves an 80% increase in spectrum efficiency compared to pilot-only based schemes, for IRSs with N = 32 elements.
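
    To make the channel estimation burden concrete, the sketch below implements a plain pilot-only least-squares estimate of the direct and cascaded (user-IRS-BS) channels for a single-antenna setup, which already requires a pilot length of N + 1 for N reflecting elements. This is a hedged baseline for intuition only; the paper's semi-blind method additionally exploits data symbols to reduce exactly this overhead.

```python
# Minimal sketch of pilot-only least-squares estimation of the direct and
# cascaded user-IRS-BS channels (single-antenna BS and user assumed for
# brevity). The channel realizations and SNR are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 32                      # number of IRS reflecting elements
T = N + 1                   # pilot length needed by the pilot-only baseline
snr_db = 10.0

h_d = (rng.normal(size=1) + 1j * rng.normal(size=1)) / np.sqrt(2)  # direct channel
v = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)    # cascaded channel

# IRS phase patterns over the pilot phase: unit-modulus DFT rows.
Phi = np.exp(-2j * np.pi * np.outer(np.arange(T), np.arange(1, N + 1)) / T)

# Received pilots (pilot symbol = 1): y_t = h_d + phi_t^T v + noise.
A = np.hstack([np.ones((T, 1)), Phi])          # T x (N + 1) measurement matrix
noise_std = 10 ** (-snr_db / 20)
y = A @ np.concatenate([h_d, v]) + noise_std * (
    rng.normal(size=T) + 1j * rng.normal(size=T)) / np.sqrt(2)

# Least-squares estimate of the stacked vector [h_d; v].
est, *_ = np.linalg.lstsq(A, y, rcond=None)
truth = np.concatenate([h_d, v])
nmse = np.linalg.norm(est - truth) ** 2 / np.linalg.norm(truth) ** 2
print(f"pilot-only LS NMSE with N={N}: {nmse:.4f}")
```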

    A Symbols Based BCI Paradigm for Intelligent Home Control Using P300 Event-Related Potentials

    Brain-Computer Interface (BCI) is a technique that allows the disabled to interact with a computer directly from their brain. P300 Event-Related Potentials (ERP) of the brain have been widely used in several BCI applications such as character spelling, word typing, wheelchair control for the disabled, neurorehabilitation, and smart home control. Most of the work done for smart home control relies on an image-flashing paradigm in which six images are flashed randomly and the user selects one of them to control an object of interest. The shortcoming of such a scheme is that the users have only six commands available to control a smart home. This article presents a symbol-based P300-BCI paradigm for controlling home appliances. The proposed paradigm comprises 12 symbols, from which users can choose one to represent their desired command in a smart home. The proposed paradigm allows users to control multiple home appliances from signals generated by the brain, and also allows them to make phone calls in a smart home environment. We put our smart home control system to the test with ten healthy volunteers, and the findings show that the proposed system can effectively operate home appliances through BCI. Using the random forest classifier, our participants achieved an average accuracy of 92.25 percent in controlling the home devices. Compared to previous studies on smart home control BCIs, the proposed paradigm gives the users more degrees of freedom: they are not only able to control several home appliances but also have the option to dial a phone number and make a call inside the smart home. As demonstrated by the results, the proposed symbol-based smart home paradigm, along with the option of making a phone call, can effectively be used for controlling a home through brain signals.
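
    The sketch below illustrates only the classification stage described above: a random forest separating target (P300 present) from non-target EEG epochs, using scikit-learn on synthetic data. The epoch dimensions and the injected "P300-like" deflection are assumptions and do not reproduce the authors' recordings or their 92.25 percent result.

```python
# Illustrative sketch of P300 vs. non-P300 epoch classification with a
# random forest. The data here are synthetic stand-ins; the paper uses
# real recordings from ten volunteers and a 12-symbol flashing paradigm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 600, 8, 64   # assumed epoch dimensions

# Synthetic epochs: target epochs get a small positive deflection.
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)
X[y == 1, :, 30:40] += 0.8          # crude stand-in for the P300 peak

X_flat = X.reshape(n_epochs, -1)    # flatten channels x samples into features
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, test_size=0.3,
                                           random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("epoch-level accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```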

    Optimal number of user antennas in a constrained pilot‐length massive MIMO system


    Performance Evaluation of Different Decision Fusion Approaches for Image Classification

    Image classification is one of the major data mining tasks in smart city applications. However, deploying classification models with good generalization accuracy is crucial for reliable decision-making in such applications. One way to achieve good generalization accuracy is to use multiple classifiers and fuse their decisions, an approach known as "decision fusion". The requirement for achieving good results with decision fusion is that there should be dissimilarity between the outputs of the classifiers. This paper proposes and evaluates two ways of attaining this dissimilarity: one uses dissimilar classifiers with different architectures, and the other uses classifiers with the same architecture but trained with different batch sizes. The paper also compares a number of decision fusion strategies.
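
    The following sketch shows one common flavor of decision fusion (soft voting) on a toy dataset: two dissimilar classifiers are trained on the same features and their per-class probabilities are averaged before the final decision. The dataset and classifier choices are placeholders for illustration, not the image classifiers or fusion strategies evaluated in the paper.

```python
# Minimal sketch of decision fusion via soft voting: average the class
# probabilities of two dissimilar classifiers. A synthetic dataset stands
# in for the image features used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=40, n_informative=15,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf_a = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
clf_b = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Decision fusion: average the class-probability outputs of both models.
fused_proba = (clf_a.predict_proba(X_te) + clf_b.predict_proba(X_te)) / 2
fused_pred = fused_proba.argmax(axis=1)

for name, pred in [("random forest", clf_a.predict(X_te)),
                   ("logistic regression", clf_b.predict(X_te)),
                   ("fused (soft voting)", fused_pred)]:
    print(f"{name:>22}: {accuracy_score(y_te, pred):.3f}")
```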

    Machine-Learning-Based IoT–Edge Computing Healthcare Solutions

    The data that medical sensors collect can be overwhelming, making it challenging to glean the most relevant insights, so a body sensor network needs an algorithm for spotting outliers in the collected data; machine learning and statistical sampling methods can serve this purpose. Real-time response optimization is a growing field as more computationally intensive tasks are offloaded to the backend, and optimizing data transfers is an active topic of study. Computing power is dispersed across many domains, and computation will become a network bottleneck as more devices gain Internet-of-Things capabilities, so it is crucial to employ both task-level parallelism and distributed computing. To avoid draining device batteries, the typical solution is to send processing to a backend server. At the same time, the widespread deployment of Internet-of-Things (IoT) devices has raised serious privacy and security concerns, and the rapid expansion of cyber threats has rendered current privacy and security measures inadequate. Machine learning (ML) methods are gaining popularity because of the reliability of their results, which can be used to anticipate and detect vulnerabilities in IoT-based systems. Edge computing improves network response times while increasing decentralization and security, and edge nodes, which frequently communicate with the cloud, can now handle a sizable portion of mission-critical computation, enabling real-time, highly efficient solutions. To this end, we use a distributed-edge-computing-based IoT framework to investigate how cloud and edge computing can be combined with ML. IoT devices with sensor frameworks can collect massive amounts of data for subsequent analysis, and the front-end component benefits from forethought in determining which information is most crucial; a backend IoT server can offer this guidance by using machine learning to find data signatures of interest. We apply these ideas to the medical field as a case study.
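
    As a hedged illustration of the edge-side outlier-screening idea mentioned above, the sketch below uses an isolation forest to flag anomalous body-sensor readings so that only suspicious windows would need to be forwarded to the backend; the sensor values, the isolation forest choice, and the contamination setting are assumptions for illustration, not data or methods from the paper.

```python
# Illustrative sketch of edge-side outlier screening: an IsolationForest
# flags anomalous vital-sign readings so only suspicious windows are sent
# to the backend for deeper ML analysis. All values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic body-sensor stream: heart rate (bpm) and skin temperature (C).
normal = np.column_stack([rng.normal(75, 5, 500), rng.normal(36.6, 0.2, 500)])
anomalies = np.array([[140.0, 38.5], [45.0, 35.0], [130.0, 39.2]])
readings = np.vstack([normal, anomalies])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(readings)          # -1 marks an outlier

suspicious = readings[flags == -1]
print(f"{len(suspicious)} of {len(readings)} windows flagged for the backend")
print(suspicious)
```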

    Finger-Gesture Recognition for Visible Light Communication Systems Using Machine Learning

    Gesture recognition (GR) has many applications for human-computer interaction (HCI) in the healthcare, home, and business arenas. However, the common techniques that realize gesture recognition using video processing are computationally intensive and expensive. In this work, we propose to task existing visible light communication (VLC) systems with gesture recognition. Different finger movements are identified by training on the light transitions between fingers using a long short-term memory (LSTM) neural network. This paper describes the design and implementation of the gesture recognition technique for a practical VLC system operating over a distance of 48 cm. The platform uses a single low-cost light-emitting diode (LED) and a photodiode sensor at the receiver side. The system recognizes gestures from interruptions in the direct light transmission and is therefore suitable for high-speed communication. Gesture recognition experiments were conducted for five gestures, and the results demonstrate that the proposed system is able to accurately identify the gestures in up to 88% of cases.
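
    The sketch below conveys the core recognition step: a small LSTM classifies a one-dimensional photodiode intensity trace, in which a finger movement briefly blocks the received light, into one of five gesture classes. The sequence length, network size, and synthetic traces are assumptions for illustration, not the paper's 48 cm experimental setup or recorded data.

```python
# Illustrative LSTM classifier for photodiode intensity traces. The traces
# are synthetic: each gesture dims the light in a different sub-window.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n_gestures, seq_len, n_per_class = 5, 100, 80   # assumed dimensions

X, y = [], []
for g in range(n_gestures):
    for _ in range(n_per_class):
        trace = np.ones(seq_len) + rng.normal(0, 0.05, seq_len)
        start = 10 + 15 * g
        trace[start:start + 12] *= 0.3          # light blocked by the finger
        X.append(trace)
        y.append(g)
X = np.array(X)[..., np.newaxis]                # (samples, time, 1 feature)
y = np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(n_gestures, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
_, acc = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {acc:.2f}")
```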